[AMDGPU][Attributor] Don't run AAAddressSpace for graphics functions
#108560
Conversation
@llvm/pr-subscribers-backend-amdgpu

Author: Shilei Tian (shiltian)

Changes

Full diff: https://github.com/llvm/llvm-project/pull/108560.diff

1 file affected:
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUAttributor.cpp b/llvm/lib/Target/AMDGPU/AMDGPUAttributor.cpp
index 687a7339da379d..37c8b043aca198 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUAttributor.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUAttributor.cpp
@@ -1077,19 +1077,22 @@ static bool runImpl(Module &M, AnalysisGetter &AG, TargetMachine &TM,
addPreloadKernArgHint(*F, TM);
}
- for (auto &I : instructions(F)) {
- if (auto *LI = dyn_cast<LoadInst>(&I)) {
- A.getOrCreateAAFor<AAAddressSpace>(
- IRPosition::value(*LI->getPointerOperand()));
- } else if (auto *SI = dyn_cast<StoreInst>(&I)) {
- A.getOrCreateAAFor<AAAddressSpace>(
- IRPosition::value(*SI->getPointerOperand()));
- } else if (auto *RMW = dyn_cast<AtomicRMWInst>(&I)) {
- A.getOrCreateAAFor<AAAddressSpace>(
- IRPosition::value(*RMW->getPointerOperand()));
- } else if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(&I)) {
- A.getOrCreateAAFor<AAAddressSpace>(
- IRPosition::value(*CmpX->getPointerOperand()));
+ // Don't bother to run AAAddressSpace for graphics.
+ if (!AMDGPU::isGraphics(F->getCallingConv())) {
+ for (auto &I : instructions(F)) {
+ if (auto *LI = dyn_cast<LoadInst>(&I)) {
+ A.getOrCreateAAFor<AAAddressSpace>(
+ IRPosition::value(*LI->getPointerOperand()));
+ } else if (auto *SI = dyn_cast<StoreInst>(&I)) {
+ A.getOrCreateAAFor<AAAddressSpace>(
+ IRPosition::value(*SI->getPointerOperand()));
+ } else if (auto *RMW = dyn_cast<AtomicRMWInst>(&I)) {
+ A.getOrCreateAAFor<AAAddressSpace>(
+ IRPosition::value(*RMW->getPointerOperand()));
+ } else if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(&I)) {
+ A.getOrCreateAAFor<AAAddressSpace>(
+ IRPosition::value(*CmpX->getPointerOperand()));
+ }
}
}
}
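For readers skimming the diff: the change simply wraps the existing AAAddressSpace seeding loop in an AMDGPU::isGraphics calling-convention check, so graphics functions are skipped entirely. Below is a minimal standalone sketch of the same logic, with a hypothetical getMemInstPointerOperand helper factored out of the repeated dyn_cast branches (the helper name is illustrative only; the actual patch keeps the branches inline):

#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Instructions.h"

// Hypothetical helper, not part of the patch: returns the pointer
// operand of a memory instruction, or nullptr for anything else.
static Value *getMemInstPointerOperand(Instruction &I) {
  if (auto *LI = dyn_cast<LoadInst>(&I))
    return LI->getPointerOperand();
  if (auto *SI = dyn_cast<StoreInst>(&I))
    return SI->getPointerOperand();
  if (auto *RMW = dyn_cast<AtomicRMWInst>(&I))
    return RMW->getPointerOperand();
  if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(&I))
    return CmpX->getPointerOperand();
  return nullptr;
}

// Equivalent of the patched loop: graphics calling conventions are
// skipped; compute functions seed AAAddressSpace for each pointer
// operand of a load, store, atomicrmw, or cmpxchg.
if (!AMDGPU::isGraphics(F->getCallingConv())) {
  for (Instruction &I : instructions(F))
    if (Value *Ptr = getMemInstPointerOperand(I))
      A.getOrCreateAAFor<AAAddressSpace>(IRPosition::value(*Ptr));
}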
No test, and I wouldn't bother skipping it. Do compute shaders really not have flat addresses?
That's what you said:
If we don't want to bother skipping it, do we want to do the same thing for
I'm pretty sure that field is just broken and should be removed. It is initialized by whatever function happens to be first when constructing the TTIImpl. The TTI is owned by the subtarget, which is unique per subtarget, not per function.
Maybe this is wrong; there are other function-dependent fields here.
It is per-function, and is created by a target-dependent approach, such as
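For context on the per-function claim: TargetMachine exposes a per-function TTI hook that targets override. A paraphrased sketch of the upstream pattern follows (exact signatures vary across LLVM versions; this is not quoted from the thread):

// Paraphrased sketch: the AMDGPU backend constructs a fresh GCNTTIImpl
// for each function passed to the TargetMachine's TTI hook.
TargetTransformInfo
GCNTargetMachine::getTargetTransformInfo(const Function &F) const {
  return TargetTransformInfo(GCNTTIImpl(this, F));
}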
This is handled directly in |

No description provided.